Weekly Insights
AI washing: signs, symptoms and solutions for investment stakeholders
Joseph Simonian

How to spot both genuine artificial intelligence use and inflated claims in the marketplace

 

THE rapid rise of artificial intelligence (AI) in finance has brought real innovation but also misleading marketing claims. Many financial services firms feel pressure to appear tech-savvy to stay competitive.

Although some firms genuinely apply machine learning (ML) and AI to improve investing, others make claims that do not match reality. These firms may use buzzwords such as “AI-driven” or “machine learning-enabled” without truly integrating these tools into their investment processes.

Consequently, clients and investors may be misled into believing they are investing in innovative, cutting-edge strategies when they are not.

This phenomenon is known as “AI washing” – the act of falsely or overly inflating claims about the use of AI in financial products or services.

The CFA Institute recently published AI Washing: Signs, Symptoms and Suggested Solutions – a report that examines what AI washing is, why firms engage in it, how it affects clients and the broader development of AI, touching on the ethical, regulatory and technical measures that can help address it. It also offers guidance to asset owners on how to spot both genuine AI use and inflated claims in the marketplace.

According to Nvidia’s State of AI in Financial Services: 2025 Trends report, 57 per cent of respondents in a global survey of financial professionals are using or considering AI for data analytics, and generative AI usage has risen sharply to 52 per cent from 40 per cent in 2023.

In addition, 37 per cent report AI-driven operational efficiencies, and 32 per cent believe AI offers a competitive advantage. The use of AI in trading and portfolio optimisation has increased to 38 per cent from 15 per cent, while its application in pricing, risk management and underwriting has grown to 32 per cent from 13 per cent.

Barriers to AI adoption

True AI in finance involves systems that process large data sets, learn patterns and make decisions – such as predicting asset prices or optimising portfolios. These efforts require serious investment in talent, technology and time.

Many investment firms, however, either lack the resources or are unwilling to overhaul their existing processes to meaningfully incorporate AI. Instead, they may add small AI elements (such as using a chatbot or large language model) but advertise their strategy as “AI-powered”, which is deceptive if these tools do not play a central role.

Barriers to real AI adoption in investing are high. Financial data is often messy, sparse and hard to predict. Unlike other industries where data is more abundant and easier to model, investment forecasting requires handling noisy, volatile and complex inputs. Consequently, many firms hesitate to disrupt their existing models that already perform well.

AI washing is particularly dangerous because it undermines explainable AI – a movement focused on making AI systems more transparent, understandable and trustworthy – especially for non-technical users. If firms exaggerate or hide how they use AI, it becomes harder for stakeholders to assess the real value or risks of these tools.

The CFA Institute report asserts that investors deserve transparency about what technologies are being used, how they work and whether they deliver value. Firms should avoid overhyping their use of AI just to attract clients or compete with rivals. Instead, they should be transparent about how they use AI, what it adds to their process and what limitations exist.

Asset managers or asset owners must be able to provide sufficient detail regarding why and how they implement AI technology in their process, what specific frameworks they use, and what results or improvements they observe from using AI.

This recommendation is in line with the ethical principles of transparency and duty to clients as set out in the CFA Institute Code of Ethics and Standards of Professional Conduct.

Asking the right questions to spot AI washing

To help stakeholders – especially asset owners and prospective clients – spot AI washing, the CFA Institute report suggests a range of questions they can pose to asset managers that claim to use AI.

Some of these questions demand a degree of technical familiarity with AI and ML, underscoring that asset owners themselves must develop at least a minimal competence in AI methodologies.

Below is a list of pertinent questions for consideration:

  • Can you specify what type of algorithm or combination of algorithms you are using and how it enhances the forecasting of asset returns?
  • How does your AI-driven model outperform simpler models? Can you provide a quantitative comparison of relevant performance metrics?
  • What data sources are you using to train your model(s), and how do these sources integrate with the rest of your process, if at all? Are you using alternative data, such as satellite imagery or sentiment analysis of earnings calls?
  • What preprocessing and feature selection techniques are used to prepare the raw data for input into your model(s)? Do you use fundamental features, such as earnings surprise, price momentum, or other signals and indicators?
  • Do you standardise or normalise the input features, and what techniques do you use to handle missing data, outliers and limited data sets?
  • How do you maximise model interpretability? Is it through model choice or post-implementation communications? If the latter, can you give some concrete examples?
  • Can you provide an example of a recent investment decision that was influenced by the model’s output? How was the rationale for that decision explained to the investment team?
  • How do you validate the robustness of the models you develop? What precautions do you take to guard against overfitting? For example, how do you tune hyperparameters in your models? How do you monitor model drift, and what mechanisms are in place to retrain the models and/or adapt to shifts in the market landscape?
  • What governance structures are in place to ensure the responsible use of AI firmwide? Do you have an internal AI audit process, and how often are the models reviewed for compliance with generally accepted standards and protocols?
  • If you use outsourcing for some or all of your AI technology needs, what processes are in place to ensure the quality and robustness of the services and products used in your investment process?
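Several of the questions above – a quantitative comparison against simpler models, standardising input features, and guarding against overfitting and look-ahead bias – can be made concrete with a short sketch. The code below is purely illustrative and is not from the report: the data is synthetic, the "features" (stand-ins for signals such as earnings surprise or price momentum) are hypothetical, and the linear model is a placeholder for whatever algorithm a manager actually claims to use. The point is the shape of the diligence check, not the model itself.

```python
# Illustrative diligence sketch (synthetic data, hypothetical features):
# does a candidate model beat a naive baseline out of sample, evaluated
# walk-forward so that no future information leaks into training?
import numpy as np

rng = np.random.default_rng(0)
n = 600
# Two hypothetical signals plus substantial noise, mimicking the low
# signal-to-noise ratio typical of financial return data.
X = rng.normal(size=(n, 2))
y = 0.3 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(scale=1.0, size=n)

def expanding_window_mse(X, y, first_train=200, step=100):
    """Compare a naive mean forecast with a fitted linear model, refit on
    each expanding training window. Features are standardised using
    training-window statistics only, to avoid look-ahead bias."""
    naive_err, model_err = [], []
    for end in range(first_train, len(y), step):
        Xtr, ytr = X[:end], y[:end]
        Xte, yte = X[end:end + step], y[end:end + step]
        mu, sd = Xtr.mean(axis=0), Xtr.std(axis=0)
        Ztr, Zte = (Xtr - mu) / sd, (Xte - mu) / sd
        # Ordinary least squares with an intercept column.
        beta, *_ = np.linalg.lstsq(
            np.c_[np.ones(len(Ztr)), Ztr], ytr, rcond=None)
        pred = np.c_[np.ones(len(Zte)), Zte] @ beta
        naive_err.append(np.mean((yte - ytr.mean()) ** 2))
        model_err.append(np.mean((yte - pred) ** 2))
    return float(np.mean(naive_err)), float(np.mean(model_err))

naive_mse, model_mse = expanding_window_mse(X, y)
print(f"naive mean forecast MSE: {naive_mse:.3f}")
print(f"candidate model MSE:     {model_mse:.3f}")
```

If the candidate model cannot beat the naive baseline out of sample under a protocol like this, the "AI-driven" label is adding marketing value rather than forecasting value – exactly the gap the questions above are designed to expose.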

Transparency is non-negotiable

Firms selling financial products should conform to the same standards of transparency that stakeholders demand of any other product. The same principle applies to the use of AI technology.

Unfortunately, because of AI’s headline-grabbing popularity, some investment firms may rush to exaggerate their success in applying AI technologies to their investment processes. Such instances of AI washing have come under heightened scrutiny from the investment community, including regulators.

By understanding and learning to detect AI washing, stakeholders can help minimise and eventually eliminate this phenomenon, resulting in better investment outcomes.

The writer is a senior affiliate researcher with CFA Institute and author of the report AI Washing: Signs, Symptoms and Suggested Solutions. He is currently the co-editor of The Journal of Financial Data Science, on the editorial board of The Journal of Portfolio Management and a member of the board of directors of the Financial Data Professional Institute.

This content has been adapted from an article that first appeared on the CFA Institute Research & Policy Center’s website.

Source: The Business Times